Doubly majorized algorithm for sparsity-inducing optimization problems with regularizer-compatible constraints

Authors

Abstract

We consider a class of sparsity-inducing optimization problems whose constraint set is regularizer-compatible, in the sense that the constraint set becomes easy to project onto after a coordinate transformation induced by the sparsity-inducing regularizer. Our model is general enough to cover, as special cases, the ordered LASSO of Tibshirani and Suo (Technometrics 58:415–423, 2016) and its variants with some commonly used nonconvex sparsity-inducing regularizers. The presence of both the regularizer and the constraint set poses challenges for the design of efficient algorithms. In this paper, exploiting the absolute-value symmetry and other properties of the sparsity-inducing regularizer, we propose a new algorithm, called the doubly majorized algorithm (DMA), for this class of problems. The DMA makes use of projections onto the constraint set after the coordinate transformation in each iteration, and hence can be performed efficiently. Without invoking any qualification conditions such as those based on horizon subdifferentials, we show that any accumulation point of the sequence generated by the DMA is a so-called $\psi_{\textrm{opt}}$-stationary point, a notion of stationarity we define inspired by the L-stationarity of Beck and Eldar (SIAM J Optim 23:1480–1509, 2013) and Beck and Hallak (Math Oper Res 41:196–223, 2016). We also show that any global minimizer of our model has to be a $\psi_{\textrm{opt}}$-stationary point, again without imposing any qualification conditions. Finally, we illustrate numerically the performance of the DMA on solving variants of the ordered LASSO with nonconvex regularizers.
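
The abstract describes the method only at a high level, so the following is a minimal, hypothetical sketch of a "majorize the loss, then project in the regularizer-induced coordinates" iteration. The names grad_f, L, T, T_inv, and project_C are placeholders rather than the paper's notation, and the actual DMA also majorizes the regularizer, which this toy example omits.

```python
import numpy as np

# Hypothetical sketch of a "majorize the loss, then project in the
# regularizer-induced coordinates" iteration. grad_f, L, T, T_inv and
# project_C are placeholder names, not the paper's notation; the real DMA
# also majorizes the regularizer, which is omitted here.

def majorize_project_step(x_k, grad_f, L, T, T_inv, project_C):
    y = x_k - grad_f(x_k) / L      # minimize a quadratic majorant of the loss
    z = project_C(T(y))            # project onto the (easy) transformed constraint set
    return T_inv(z)                # map back to the original coordinates

# Toy usage: least squares with a nonnegativity constraint and T = identity.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient of 0.5*||Ax - b||^2
grad_f = lambda x: A.T @ (A @ x - b)
identity = lambda v: v
project_nonneg = lambda z: np.maximum(z, 0.0)

x = np.zeros(5)
for _ in range(500):
    x = majorize_project_step(x, grad_f, L, identity, identity, project_nonneg)
print(np.round(x, 4))
```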


Similar articles

Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints

We study the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor. While the resulting optimization problem is generally NP-hard, several approximation algorithms are considered. We analyze the performance of these algorithms, focusing on the characterization of the trade-off between accuracy a...
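
The abstract above only mentions that approximation algorithms are considered. As one standard example of such an approximation for sparsity-constrained loss minimization (not necessarily the algorithms analyzed in that paper), here is a greedy forward-selection sketch for least squares under a hard sparsity budget k.

```python
import numpy as np

def greedy_forward_selection(A, b, k):
    """Greedy approximation for min ||A x - b||^2 subject to ||x||_0 <= k:
    repeatedly add the column that best explains the current residual,
    then refit by least squares on the selected columns."""
    n = A.shape[1]
    support, x = [], np.zeros(n)
    for _ in range(k):
        scores = np.abs(A.T @ (b - A @ x))               # correlation with residual
        scores[np.array(support, dtype=int)] = -np.inf   # skip already-chosen columns
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
    return x

# Toy check: the true predictor uses columns 2 and 7 only.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = A[:, [2, 7]] @ np.array([3.0, -2.0]) + 0.01 * rng.standard_normal(50)
print(np.nonzero(greedy_forward_selection(A, b, 2))[0])  # typically recovers {2, 7}
```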


Optimization with Sparsity-Inducing Penalties

Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appr...
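
As a minimal illustration of the kind of convex formulation the abstract refers to, here is an ISTA-style proximal gradient sketch for ℓ1-regularized least squares (the lasso). It is a generic example, not code from the surveyed work.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min 0.5*||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 8))
b = A[:, :2] @ np.array([2.0, -1.5]) + 0.05 * rng.standard_normal(40)
print(np.round(ista_lasso(A, b, lam=1.0), 3))  # most coordinates shrink to zero
```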


Convex Optimization with Mixed Sparsity-inducing Norm

Sparsity-inducing norms have been a powerful tool for learning robust models with limited data in high-dimensional spaces. By imposing such norms as constraints or regularizers in an optimization setting, one can bias the model towards learning sparse solutions, which in many cases have been proven to be more statistically efficient [Don06]. Typical sparsity-inducing norms include the ℓ1 norm [Tib96] ...


Convex Optimization with Sparsity-Inducing Norms

The principle of parsimony is central to many areas of science: the simplest explanation of a given phenomenon should be preferred over more complicated ones. In the context of machine learning, it takes the form of variable or feature selection, and it is commonly used in two situations. First, to make the model or the prediction more interpretable or computationally cheaper to use, i.e., even...


Online Linear Optimization with Sparsity Constraints

We study the problem of online linear optimization with sparsity constraints in the semi-bandit setting. It can be seen as a marriage between two well-known problems: the online linear optimization problem and the combinatorial bandit problem. For this problem, we provide two algorithms which are efficient and achieve sublinear regret bounds. Moreover, we extend our results to two gener...



Journal

Journal title: Computational Optimization and Applications

Year: 2023

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-023-00503-1